Meaningful Representations Prevent Catastrophic Interference

Authors

  • Jordi Bieger
  • Ida Sprinkhuizen-Kuyper
  • Iris van Rooij
Abstract

Artificial Neural Networks (ANNs) attempt to mimic human neural networks in order to perform tasks. To do this, tasks need to be represented in ways that the network understands. In ANNs these representations are often arbitrary, whereas in humans they often appear to be meaningful. This article shows how using more meaningful representations in ANNs can be very beneficial. We demonstrate that by using our Static Meaningful Representation Learning (SMRL) technique, ANNs can avoid the problem of catastrophic interference when sequentially learning multiple simple tasks. We also discuss how our approach overcomes known limitations of other techniques for dealing with catastrophic interference.
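The abstract does not include the authors' code, but the phenomenon it targets is easy to reproduce. The following is a minimal sketch, not the SMRL technique: a small NumPy network with arbitrary (random) representations is trained on one task, then on a second task, and its error on the first task is measured before and after. All patterns, layer sizes, and learning rates are illustrative assumptions.

```python
# A minimal sketch (not the authors' SMRL code) of catastrophic interference:
# a small NumPy multilayer perceptron is trained on task A, then on task B,
# and its error on task A is measured before and after. All patterns and
# hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyMLP:
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)        # hidden activations
        self.y = sigmoid(self.h @ self.W2)   # outputs
        return self.y

    def train(self, X, T, epochs=2000, lr=0.5):
        for _ in range(epochs):
            y = self.forward(X)
            # backprop for mean squared error with sigmoid units
            d_out = (y - T) * y * (1 - y)
            d_hid = (d_out @ self.W2.T) * self.h * (1 - self.h)
            self.W2 -= lr * self.h.T @ d_out / len(X)
            self.W1 -= lr * X.T @ d_hid / len(X)

    def error(self, X, T):
        return np.mean((self.forward(X) - T) ** 2)

# Two small binary-pattern tasks with arbitrary (non-meaningful) representations.
X_a = rng.integers(0, 2, (8, 10)).astype(float)
T_a = rng.integers(0, 2, (8, 3)).astype(float)
X_b = rng.integers(0, 2, (8, 10)).astype(float)
T_b = rng.integers(0, 2, (8, 3)).astype(float)

net = TinyMLP(10, 16, 3)
net.train(X_a, T_a)
err_before = net.error(X_a, T_a)

net.train(X_b, T_b)           # sequential training on task B, no rehearsal of A
err_after = net.error(X_a, T_a)

print(f"Task A error after learning A:        {err_before:.4f}")
print(f"Task A error after then learning B:   {err_after:.4f}")  # typically much higher
```

Running a sketch like this typically shows the task A error climbing sharply once task B has been learned, which is the interference the abstract claims meaningful representations prevent.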

Similar articles

Dynamically constraining connectionist networks to produce distributed, orthogonal representations to reduce catastrophic interference

It is well known that when a connectionist network is trained on one set of patterns and then attempts to add new patterns to its repertoire, catastrophic interference may result. The use of sparse, orthogonal hidden-layer representations has been shown to reduce catastrophic interference. The author demonstrates that the use of sparse representations may, in certain cases, actually result in w...
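As a rough illustration of why sparse codes help, the sketch below (not the cited paper's constraint mechanism) applies a k-winner-take-all step to hidden activations and compares the average pairwise overlap of dense versus sparsified codes. The function names and sizes are made up for the example.

```python
# A minimal sketch, not the cited paper's method: one simple way to force
# sparse hidden-layer codes is a k-winner-take-all step that keeps only the
# k most active hidden units per pattern. Sparse codes for different patterns
# overlap less, which is the property the abstract above appeals to.
import numpy as np

def k_winner_take_all(h, k):
    """Zero out all but the k largest activations in each row of h."""
    out = np.zeros_like(h)
    top = np.argsort(h, axis=1)[:, -k:]          # indices of the k winners
    rows = np.arange(h.shape[0])[:, None]
    out[rows, top] = h[rows, top]
    return out

def mean_overlap(h):
    """Average pairwise cosine similarity between hidden representations."""
    norms = np.linalg.norm(h, axis=1, keepdims=True) + 1e-12
    sims = (h / norms) @ (h / norms).T
    n = h.shape[0]
    return (sims.sum() - n) / (n * (n - 1))      # exclude the diagonal

rng = np.random.default_rng(1)
hidden = rng.random((20, 50))                    # dense activations for 20 patterns
print("overlap, dense:", round(mean_overlap(hidden), 3))
print("overlap, k=5  :", round(mean_overlap(k_winner_take_all(hidden, 5)), 3))
```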

Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks

In connectionist networks, newly-learned information destroys previously-learned information unless the network is continually retrained on the old information. This behavior, known as catastrophic forgetting, is unacceptable both for practical purposes and as a model of mind. This paper advances the claim that catastrophic forgetting is a direct consequence of the overlap of the system’s distr...
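The semi-distributed representations referred to here are usually obtained by "activation sharpening". The sketch below follows the common formulation, in which the k most active hidden nodes are pushed toward 1 and the rest toward 0; it is only an illustration, and the paper's exact procedure and parameters may differ.

```python
# A minimal sketch of activation sharpening, one common way to obtain
# semi-distributed hidden representations (the paper's exact procedure may
# differ). The k most active hidden nodes are pushed toward 1 and the rest
# toward 0, so each pattern ends up using only a few strongly active nodes.
import numpy as np

def sharpen(h, k=3, alpha=0.5):
    """Sharpen each row of hidden activations h (values in [0, 1])."""
    sharpened = h * (1.0 - alpha)                # push all nodes toward 0 ...
    top = np.argsort(h, axis=1)[:, -k:]          # ... except the k most active,
    rows = np.arange(h.shape[0])[:, None]
    sharpened[rows, top] = h[rows, top] + alpha * (1.0 - h[rows, top])  # pushed toward 1
    return sharpened

rng = np.random.default_rng(2)
h = rng.random((4, 8))                           # hidden activations for 4 patterns
print(np.round(h, 2))
print(np.round(sharpen(h), 2))                   # few large values, many small ones
```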

Interactive tandem networks and the sequential learning problem

This paper presents a novel connectionist architecture to handle the "sensitivity-stability" problem and, in particular, an extreme manifestation of the problem, catastrophic interference. This architecture, called an interactive tandem-network (ITN) architecture, consists of two continually interacting networks, one — the LTM network — dynamically storing "prototypes" of the patterns learned, ...
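Without the paper's details, the general mechanism it relies on can still be sketched: a long-term store keeps prototypes of previously learned patterns, and those prototypes are mixed back into later training data so old structure keeps being rehearsed. The class and function names below are invented for illustration and do not reproduce the ITN architecture.

```python
# A minimal sketch of prototype rehearsal, only an illustration of the general
# idea of interleaving a long-term store with new learning, not the ITN
# architecture itself. All names here are invented for the example.
import numpy as np

class PrototypeStore:
    """Keeps one running-average prototype (input, target) per task."""
    def __init__(self):
        self.prototypes = {}

    def update(self, task_id, X, T):
        self.prototypes[task_id] = (X.mean(axis=0), T.mean(axis=0))

    def rehearsal_batch(self):
        items = list(self.prototypes.values())
        if not items:
            return None, None
        X = np.stack([x for x, _ in items])
        T = np.stack([t for _, t in items])
        return X, T

def mix_with_rehearsal(X_new, T_new, store):
    """Append stored prototypes to the new task's training set."""
    X_old, T_old = store.rehearsal_batch()
    if X_old is None:
        return X_new, T_new
    return np.vstack([X_new, X_old]), np.vstack([T_new, T_old])

# Usage: after learning task A, store its prototype; then train on task B
# using mix_with_rehearsal(X_b, T_b, store) instead of (X_b, T_b) alone.
rng = np.random.default_rng(3)
store = PrototypeStore()
X_a, T_a = rng.random((8, 10)), rng.random((8, 3))
store.update("task_a", X_a, T_a)
X_b, T_b = rng.random((8, 10)), rng.random((8, 3))
X_mix, T_mix = mix_with_rehearsal(X_b, T_b, store)
print(X_mix.shape, T_mix.shape)                  # (9, 10) (9, 3)
```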

Catastrophic Interference is Eliminated in Pretrained Networks

When modeling strictly sequential experimental memory tasks, such as serial list learning, connectionist networks appear to experience excessive retroactive interference, known as catastrophic interference (McCloskey & Cohen, 1989; Ratcliff, 1990). The main cause of this interference is overlap among representations at the hidden unit layer (French, 1991; Hetherington, 1991; Murre, 1992). This ca...
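The overlap blamed here can be made concrete with a small diagnostic, sketched below under assumed names and sizes: the mean cosine similarity between the hidden codes that an old task and a new task evoke in the same network, a number one could compare before and after pretraining.

```python
# A hypothetical diagnostic sketch, not taken from the cited paper: cross-task
# overlap is estimated as the mean cosine similarity between the hidden
# activation vectors evoked by old-task and new-task patterns in the same
# network. Lower values mean the two tasks use the hidden layer differently.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_task_overlap(W_hidden, X_old, X_new):
    """Mean cosine similarity between old-task and new-task hidden codes."""
    h_old = sigmoid(X_old @ W_hidden)
    h_new = sigmoid(X_new @ W_hidden)
    h_old /= np.linalg.norm(h_old, axis=1, keepdims=True)
    h_new /= np.linalg.norm(h_new, axis=1, keepdims=True)
    return float((h_old @ h_new.T).mean())

rng = np.random.default_rng(4)
X_old = rng.integers(0, 2, (8, 10)).astype(float)
X_new = rng.integers(0, 2, (8, 10)).astype(float)
W_flat = rng.normal(0, 0.1, (10, 16))    # weights that barely differentiate patterns
W_spread = rng.normal(0, 3.0, (10, 16))  # weights that spread patterns apart
print(round(cross_task_overlap(W_flat, X_old, X_new), 3))    # close to 1.0
print(round(cross_task_overlap(W_spread, X_old, X_new), 3))  # noticeably lower
```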

Publication date: 2009